- Search Results
Search for: All records
Total Resources: 3
- Author / Contributor
  - Dhurandhar, Amit (3)
  - Afroogh, Saleh (1)
  - Atkinson, David (1)
  - Chao, Hanqing (1)
  - Chen, Kevin (1)
  - Chen, Pin-Yu (1)
  - Gao, Tia (1)
  - Ji, Qiang (1)
  - Jiao, Junfeng (1)
  - Tajer, Ali (1)
  - Wang, Hanjing (1)
  - Xu, Yangyang (1)
  - Yan, Pingkun (1)
  - Yin, Naiyu (1)
  - Zhang, Jiajin (1)
The pursuit of generalizable representations in the realm of machine learning and computer vision is a dynamic field of research. Typically, current methods aim to secure invariant representations by either harnessing domain expertise or leveraging data from multiple domains. In this paper, we introduce a novel approach that involves acquiring Causal Markov Blanket (CMB) representations to improve prediction performance in the face of distribution shifts. Causal Markov Blanket representations comprise the direct causes and effects of the target variable, rendering them invariant across diverse domains. To elaborate, our approach commences with the introduction of a novel structural causal model (SCM) equipped with latent representations, designed to capture the underlying causal mechanisms governing the data generation process. Subsequently, we propose a CMB representation learning framework that derives representations conforming to the proposed SCM. In comparison to state-of-the-art domain generalization methods, our approach exhibits robustness and adaptability under distribution shifts.
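No code accompanies this abstract. As a rough illustration of the Markov blanket notion the paper builds on, the sketch below computes the Markov blanket (parents, children, and co-parents) of a target node in a small hypothetical DAG and then predicts the target from those variables alone. The graph, variable names, and toy data are assumptions made for illustration; this is not the paper's SCM or CMB representation learning framework.

```python
# Illustrative sketch only: Markov blanket of a target node in a known DAG,
# then prediction from just those variables.  Graph and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical causal DAG, edges given as (parent, child); Y is the target.
edges = [("A", "Y"), ("B", "Y"), ("Y", "C"), ("D", "C"), ("E", "A")]

def markov_blanket(edges, target):
    """Parents, children, and co-parents (spouses) of `target`."""
    parents = {p for p, c in edges if c == target}
    children = {c for p, c in edges if p == target}
    spouses = {p for p, c in edges if c in children and p != target}
    return parents | children | spouses

mb = markov_blanket(edges, "Y")  # {'A', 'B', 'C', 'D'}

# Toy data generated to respect the DAG: E -> A, (A, B) -> Y, (Y, D) -> C.
rng = np.random.default_rng(0)
n = 500
E = rng.normal(size=n)
A = E + rng.normal(scale=0.5, size=n)
B, D = rng.normal(size=n), rng.normal(size=n)
y = (A + B + rng.normal(scale=0.1, size=n) > 0).astype(int)
C = y + D + rng.normal(scale=0.1, size=n)
data = {"A": A, "B": B, "C": C, "D": D, "E": E}

# Predict Y from its Markov blanket only; variables outside the blanket
# (here E) add nothing once the blanket is observed.
features = sorted(mb)
X = np.column_stack([data[v] for v in features])
clf = LogisticRegression().fit(X, y)
print(features, round(clf.score(X, y), 3))
```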
Jiao, Junfeng; Afroogh, Saleh; Chen, Kevin; Atkinson, David; Dhurandhar, Amit (Harvard Dataverse)
AGGA (Academic Guidelines for Generative AIs) is a dataset of 80 academic guidelines for the usage of generative AIs and large language models in academia, selected systematically and collected from official university websites across six continents. Comprising 181,225 words, the dataset supports natural language processing tasks such as language modeling, sentiment and semantic analysis, model synthesis, classification, and topic labeling. It can also serve as a benchmark for ambiguity detection and requirements categorization. This resource aims to facilitate research on AI governance in educational contexts, promoting a deeper understanding of the integration of AI technologies in academia.
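As a rough illustration of one task the dataset description lists (topic labeling), the sketch below runs TF-IDF plus NMF over a few invented placeholder sentences. These strings are not taken from AGGA; the dataset itself is distributed via Harvard Dataverse.

```python
# Illustrative sketch of topic labeling on guideline-style text; the sentences
# below are invented placeholders, not content from the AGGA dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

guidelines = [
    "Students must disclose any use of generative AI in submitted coursework.",
    "Generative AI may support brainstorming but not graded writing assignments.",
    "Instructors should state a course-level policy on large language models.",
    "Text produced by AI tools must be cited like any other external source.",
]

# TF-IDF features over the (placeholder) guideline documents.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(guidelines)

# Factor into two topics and print the top terms of each.
nmf = NMF(n_components=2, random_state=0)
nmf.fit(tfidf)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(nmf.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top_terms)}")
```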
Zhang, Jiajin; Chao, Hanqing; Dhurandhar, Amit; Chen, Pin-Yu; Tajer, Ali; Xu, Yangyang; Yan, Pingkun (Proceedings of the AAAI Conference on Artificial Intelligence)
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions. This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG), where only a single source domain is available for training. To tackle this challenge, we first seek to understand when neural networks fail to generalize. We empirically identify a property of a model, which we term model sensitivity, that correlates strongly with its generalization. Based on our analysis, we propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies. Models trained with these hard-to-learn samples can effectively suppress the sensitivity in the frequency space, which leads to improved generalization performance. Extensive experiments on multiple public datasets demonstrate the superiority of our approach, which surpasses the state-of-the-art single-DG methods by up to 2.55%. The source code is available at https://github.com/DIAL-RPI/Spectral-Adversarial-Data-Augmentation.
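The authors' implementation is available at the repository linked above. The sketch below only illustrates the general idea of frequency-targeted adversarial augmentation: score each Fourier amplitude by the gradient magnitude of the loss (a stand-in for the paper's notion of model sensitivity) and push the most sensitive frequencies in the adversarial direction. The toy model, image, quantile threshold, and step size are placeholder assumptions, not SADA itself.

```python
# Illustrative sketch of frequency-targeted adversarial augmentation; this is
# NOT the authors' SADA code (see the GitHub link above).  Model, image,
# quantile threshold, and step size are arbitrary placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))  # toy classifier
image = torch.rand(1, 1, 32, 32)                              # toy grayscale image
label = torch.tensor([3])

# Move to the frequency domain and expose the amplitude to autograd.
spectrum = torch.fft.fft2(image)
amplitude, phase = spectrum.abs(), spectrum.angle()
amplitude.requires_grad_(True)

recon = torch.fft.ifft2(torch.polar(amplitude, phase)).real
loss = F.cross_entropy(model(recon), label)
loss.backward()

# Treat the gradient magnitude w.r.t. each amplitude as that frequency's
# "sensitivity", and perturb only the most sensitive 10% of frequencies.
sensitivity = amplitude.grad.abs()
mask = (sensitivity >= sensitivity.flatten().quantile(0.9)).float()
step = 0.5
adv_amplitude = (amplitude.detach() + step * mask * amplitude.grad.sign()).clamp(min=0)
augmented = torch.fft.ifft2(torch.polar(adv_amplitude, phase)).real.clamp(0, 1)
print(augmented.shape)  # hard augmented sample to train on alongside `image`
```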